Fall 2022: name change from 498-NS to 410-NS. This list is from spring 2021 (Jan-May). Format: Lec, Date, @time & topic.

Lec 1, 1/25/2021
Not a very useful lecture, from Room 1013 ECEB (massive technical difficulties).

Lec 2, 1/27/2021
@40:00 Summary lecture on how to record spikes
@19:20 NL (nonlinear) lumped diffusion-line model
Helmholtz measured the speed of spikes as 27 m/s, around 1850 (about 1/10 the speed of sound); the speed of light is 3e8 m/s (not 3e5)
Cole measured the shape of the pulse in 1938 using WWII-era electronics
The brain is a logic analyzer, the ultimate information processor; Shannon and his information theory
This class is about information processing; the math is important but not required
Squid: a single fiber, vs. rabbit: 375 fibers (wires) (page 6, Scott)
@57:00 Myelin
@42:00 Discussion of the flaw of the candle model of spike propagation
@45:00 Brief discussion of the 2-diode model
@48:00 Transistors are voltage-controlled diodes

Lec 3, 1/29/2021
History: McCulloch-Pitts (1943) (logic processing) @10:00; feedback (circles)
Properties of the "broken brain": we learn how the brain works when something breaks
MP model; Cajal, Lorente de Nó: importance of dendritic trees @2:00
Organization of the brain: cerebellum = motor control; crude discussion of the organization of the brain
@11:30 delay-and-add of neural spikes; information processing leading to a single spike @13:00; importance of myelin @14:00
1 kHz max spike rate @13:00
feedback amplifier @16:00
role of the lungs in cooling the body @18:00
stability (Nyquist theorem) @20:00
Boolean analysis, built in; speech perception; Elmo @22:00
Cajal refuses to let de Nó publish on feedback; science is about going for the truth @23:00
Perception can learn (recognize simple patterns), but not generalize this learning @24:00
JNDs and their importance (psychophysics); Wiener's role @26:00
Wiener filter @27:00
feedback, positive vs. negative (Scott) @23:00
positive feedback and unstable responses; exponential response @31:00
limits-to-growth study; stable system with positive feedback @33:00
global warming: limits of growth @35:00
short summary of history (book pp. 11-19) @38:00
This course will teach you a heck of a lot about how the brain works @39:00
role of the homework problems @40:00
C. Koch, his book, and the Allen Institute @42:00
Minsky and Papert, "The Perceptron": it cannot work (McCulloch-Pitts, Rosenblatt, Minsky-Papert). Hinton brought the perceptron back. We will jump to Chapter 12 in Lec 4. Why? We should talk about the ... @45:00
First: structure of a neuron @47:43
Nodes of Ranvier @49:00
Mathematics is a language; bees have a (neuronal) density 10 to 100 times more dense

Lec 4
Move to Ch. 12. What is science and how does it work? Interesting but not very useful.
12.1.1 Reductionism: how does science work; Helmholtz's contributions; Cole's spike shape
@9:00 Hodgkin-Huxley model; role of technology
@11:00 a small number of people make most of the discoveries
@13:00 Gauss' contributions
@14:00 Is reductionism flawed? (makes no sense to me)
@16:00 Evolution and mutations of Covid-19
@17:00 God vs. evolution; evolution is a creeping theory; it's not really a science, it's a theory
@22:00 The googol (10^100) is simply a very large number, but otherwise has no significance
@23:00 There are (only) 20 amino acids; proteins are a sequence of amino acids
@25:00 the argument against reductionism: why are these the 20 amino acids?
@27:00 string 20 amino acids together to create proteins: why the 20 we have? (see the worked number after these notes)
@20:00 There are too many unknowns
@30:00 There is simply too much we don't know
@34:00 Here is a fact: there is something called truth. DNA can be sequenced. Boolean algebra is an algebra of logic. That's truth. Eventually the truth comes out.
Example of de Nó's theory of feedback in the brain @38:00
Death of old, wrong ideas and the birth of new ones. It's about truth. @40:00
Auditory processing: Fletcher's model of auditory processing @45:00
Ga, Da, Tha example: we know what the brain is doing
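A worked number connecting the googol remark (@22:00) with the amino-acid count (@23:00-@27:00); the arithmetic is mine, not from the lecture:

\[
N(n) = 20^{\,n}\ \text{length-}n\ \text{amino-acid sequences}, \qquad
N(100) = 20^{100} = 10^{100\log_{10}20} \approx 10^{130} > 10^{100}\ (\text{a googol}).
\]

So even a modest 100-residue protein has more possible sequences than a googol, which is one way to read the "too many unknowns" objection to reductionism.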
Lec 5
@2:15 Inner workings of a neuron; general discussion of the book's presentation of the generic neuron
@6:00 brain subsystems
@7:00 utility of negative feedback
@10:00 currents of dendritic trees sum to produce the output spike (sum and add)
@11:30 how to measure the information signal-to-noise ratio via the error rates (information theory)
@13:28 cell voltage properties; summation of spikes; threshold reached, the neuron fires
@14:30 events (output)
@15:43 dendritic tree of inputs (p. 208) and Boolean responses
@16:45 intra-cell voltages; convergence of spikes at branches
@19:00 Purkinje cell of the cerebellum (p. 208) (detailed discussion of Cajal's drawing)
@22:30 how the neuron works: spike propagation per Hodgkin-Huxley (2.2.2); NL diffusion equation
@27:00 membrane model of a diffusion line: 2x2 matrix formulation (HW)
@29:00 role of conservation of energy: zero loss with zero heat; capacitors are lossless (switched-capacitor)
@31:00 Sigma-Delta codec ($3 with 20-bit dynamic range; at ~6 dB/bit, 16 bits gives 96 dB of dynamic range)
@34:00 story of the 20-bit-to-1-bit converter, equivalent to 24 bits (@36:00)
@37:00 How does Sigma-Delta work?
@38:00 oversampling by 500 (or so); clock jitter
@42:00 feedback with a low-pass filter in the loop ==> "noise shaping": push all the noise to higher frequencies (see the sigma-delta sketch after these notes)
@44:00 I speculate that neurons are doing noise-shaping
@46:00 sigma-delta pushes the limits, approaching the brain's signal processing (it's not as good)
@47:00 hair cells measure their own Brownian motion (Winfried Denk and Watt Webb); we are not even close to what nature is doing; the fidelity of vision and hearing is amazing; P = 4kTB (thermal noise)

Lec 6, Feb 5, 2021: Nonlinear diffusion lines
@0:16 2-diode model of spike propagation
@2:00 lipid bilayers (fat cells); analogy to the skin around the body (p. 50) (not shown on the 360 video)
@5:00 drawing on the blackboard; electrical properties; as a tube it's a neuron
@6:42 sharing my screen from the book pages
@7:00 1 uF/cm^2 membrane capacitance
@9:30 discussion of 100 uF caps and their limitations
@11:00 ion channels for Na and K
@13:00 2x2 transmission-matrix model of a diffusion line
@14:30 Nyquist as spatial sampling (Helmholtz 1850, first measurement of spike velocity)
@16:31 tau = RC (time constant of a membrane patch); L = 0.1 mm, radius = 2 um (see the patch calculation after these notes)
@19:00 how to model the spike propagation: ABCD matrix method described (see the cascade sketch after these notes)
@24:00 Ohm's law used to derive the 2x2 matrix equations
@30:00 comparison to the LC transmission line; characteristic impedance and velocity on a TL (see book)
@34:00 eigenvalue analysis is the key tool for understanding these circuits
@36:00 (mention of Green's function) convolution (page 32, eq. 2.3)
@40:00 easily stated facts, followed by reality (1950, Hodgkin-Huxley squid axon): rest potential -65 [mV]
@41:30 skin is important but we don't need to talk about it; you can die of neural failures
@45:30 neural spikes are NL propagation
@46:30 the HH model looks like a diode-battery circuit, for Na and K ions in water
@48:00 why HH took the inside of the squid axon as the reference ground
@49:00 K and Na channels and how they work across the lipid bilayer
@50:30 diode version of the HH model
@51:33 output of the diode model
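A minimal simulation of the first-order sigma-delta loop sketched at Lec 5 @37:00-@42:00. The oversampling ratio of 500 follows the lecture; the loop order, test signal, and decimation filter are my own illustrative choices:

```python
# First-order sigma-delta modulator (cf. Lec 5 @37:00-@42:00): an
# integrator and a 1-bit quantizer inside a feedback loop. The loop
# pushes quantization noise to high frequencies ("noise shaping"),
# so a low-pass filter over the oversampled bitstream recovers the input.
import numpy as np

def sigma_delta(x):
    """1-bit encode x (values in [-1, 1]); returns a +/-1 bitstream."""
    v, y = 0.0, 0.0                      # integrator state, previous output bit
    bits = np.empty_like(x)
    for n, xn in enumerate(x):
        v += xn - y                      # integrate the tracking error
        y = 1.0 if v >= 0 else -1.0      # 1-bit quantizer
        bits[n] = y
    return bits

osr = 500                                # oversampling ratio ("500 or so", @38:00)
t = np.arange(100 * osr)
x = 0.5 * np.sin(2 * np.pi * t / (20 * osr))   # slow test sine (illustrative)
bits = sigma_delta(x)

# Crudest possible decimating low-pass: average each block of osr bits.
recovered = bits.reshape(-1, osr).mean(axis=1)
print("max |error|:", np.max(np.abs(recovered - x.reshape(-1, osr).mean(axis=1))))
```

Averaging each block of 500 bits is the simplest decimation filter; for a first-order loop the residual error scales roughly as 1/OSR.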
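Plugging in the Lec 6 membrane-patch numbers (@7:00 capacitance, @16:31 geometry). The specific membrane resistance r_m = 1000 ohm*cm^2 below is an assumed textbook round number, not a value from the lecture:

```python
# Membrane-patch calculation from Lec 6: cylinder of length L = 0.1 mm,
# radius 2 um, with 1 uF/cm^2 of membrane capacitance (lecture values).
# r_m is an ASSUMED round number for specific membrane resistance.
import math

c_m = 1e-6                     # F/cm^2, membrane capacitance per area (lecture)
r_m = 1e3                      # ohm*cm^2, membrane resistance x area (assumption)
L = 0.01                       # cm (0.1 mm patch length, lecture)
radius = 2e-4                  # cm (2 um, lecture)

area = 2 * math.pi * radius * L            # lateral area of the cylindrical patch
C = c_m * area                             # patch capacitance: ~1.3e-11 F (~13 pF)
R = r_m / area                             # patch leak resistance: ~8e7 ohm
print(f"area = {area:.3e} cm^2, C = {C:.3e} F, R = {R:.3e} ohm")
print(f"tau = RC = {R * C * 1e3:.2f} ms")  # = r_m * c_m = 1 ms, independent of area
```

Note that tau = RC = r_m * c_m is independent of patch area, since R and C scale inversely with it.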
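A sketch of the ABCD (2x2 transmission-matrix) method described at Lec 6 @13:00-@24:00, applied to an RC diffusion line; the per-section R and C, the section count, and the test frequency are illustrative values I chose, not the lecture's:

```python
# ABCD (2x2 transmission-matrix) model of an RC diffusion line:
# one series-R / shunt-C section per spatial sample, cascaded by
# matrix multiplication (the "ABCD matrix method" of Lec 6 @19:00).
import numpy as np

def rc_section(R, C, omega):
    """ABCD matrix of one series-R, shunt-C section at radian frequency omega."""
    T_series = np.array([[1.0, R], [0.0, 1.0]], dtype=complex)
    T_shunt = np.array([[1.0, 0.0], [1j * omega * C, 1.0]], dtype=complex)
    return T_series @ T_shunt

R, C, N = 1e6, 10e-12, 100          # per-section values and section count (illustrative)
omega = 2 * np.pi * 1e3             # test frequency: 1 kHz

T = np.linalg.matrix_power(rc_section(R, C, omega), N)
# [V1, I1]^T = T [V2, I2]^T; with the far end open (I2 = 0), V1 = T[0,0] * V2.
H = 1.0 / T[0, 0]                   # end-to-end voltage transfer
print(f"|V2/V1| at 1 kHz through {N} sections: {abs(H):.3e}")
```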
Lec 7, Feb 8
@2:00 multipolar vs. unipolar neurons
@4:00 discussion of the anatomy of a generic cell
@10:00 dendritic trees; spike shape as a function of time (details)
@12:00 Nodes of Ranvier (myelin)
@14:00 more pictures of the neuron body and the myelin sheath
@24:00 impedance of capacitors and resistors
@28:00 vesicles at the synapse (having problems with my browser)
@30:00 how to get to Lec 7 from the web page
@34:00 synapse discussion
@37:00 voltages in the pre- and postsynaptic, pre-myelin part of the neuron
@38:00 dyes and stains to visualize multi-neuron slices of cells
@41:00 vesicles containing neurotransmitters are guided to the synapse
@47:30 diagram of Na and K concentrations across the neuron during spike propagation <===****
@49:00 Q(t) = C V(t); current = dQ/dt (worked out after these notes)
@50:00 basic equations of voltage vs. current across the cell membrane
@52:00 summary discussion (ends at 53:00)

Lec 8, Feb 10, 2021
@4:00 Boolean Hilbert space
@4:30 What does the brain do? Answer: logic
@7:00 seeing a child learn: what is teaching? Not always accepted. Examples. The social contract.
@8:00 not everyone wants to follow the rules
@9:00 you need to learn to read, unlike learning to talk
@10:00 McCulloch-Pitts paper: incomprehensible? Widely cited, thus important.
@11:24 Rosenblatt (1958) and the introduction of the perceptron; a huge step forward; how do we learn?
@13:30 Boolean algebra rules: summary
@18:00 numbers and their definitions
@20:00 the set of Boolean numbers {0, 1}
@21:00 is Boolean linear or nonlinear? I don't know.
@22:30 how large is the output space of Boolean functions? (p. 236)
@23:00 Boolean processing is not limited to humans (this seems important)
@24:00 dogs learn (it's obvious); birds can count
@25:00 bits are the units of Boolean algebra
@26:00 state diagrams, driven by a clock; eigenstates
@29:00 counting states: M = 2^N states for N binary inputs, and 2^M distinct Boolean functions (see the counting note after these notes)
@33:00 Rosenblatt: Eq. 10.2 (page 237)
@35:00 integrate-and-fire and learning
@37:00 Hilbert space of Boolean responses (hyperplane defined by a scalar product)
@40:00 what is a "dot" product? What does it mean? How can it be generalized?
@42:00 projection of one vector on another: how much of A is in the direction of B? (formula after these notes)
@42:30 wedge product (rare but important)
@46:00 training algorithm: how to do it right? Only one way! (see the perceptron sketch after these notes)
@48:00 shoes in bed (never happens; why?); the one "right" way is unique
@49:00 what is hard-wired?

Lec 9, Feb 10
@1:30 bounds on computational capacity (p. 242)
@7:00 Minsky and Papert: proved the perceptron can't work (but it does)
@9:00 vector-space model (screen not being shown); M = 2^N for N neurons; number of states M^N
@12:00 p. 243 footnote: "perhaps every neuron does not represent a state"
@16:00 Fig. 10.1 showing Boolean ("support vector machine")
@18:00 this is a Boolean space; Scott does a great job of explaining it
@21:00 back-propagation
@24:00 Scott's reading of Boolean processing; nets with circles yet to be considered; example of learning how to use a drinking fountain (learning with feedback)
@27:00 Ca is deposited at the learned synapse
@31:00 can't really apply science to this, at least not in this century
@33:00 CRISPR defined
@49:00 for Lec 10, we will discuss practical stuff, like how the eye and ear work
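The membrane-current step implied at Lec 7 @49:00-@50:00, written out (standard circuit identity; holding C constant is my assumption):

\[
Q(t) = C\,V(t) \quad\Longrightarrow\quad i(t) = \frac{dQ}{dt} = C\,\frac{dV}{dt},
\]

so the capacitive membrane current is proportional to the slope of the membrane voltage.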
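The standard counting behind Lec 8 @29:00 and Lec 9 @9:00 (my reconstruction of the garbled "(M^N)^N" note; Scott's book may count differently):

\[
M = 2^{N}\ \text{states of }N\text{ binary inputs}, \qquad 2^{M} = 2^{2^{N}}\ \text{distinct Boolean functions}.
\]

For example, N = 4 inputs give M = 16 states and 2^16 = 65,536 distinct Boolean functions.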
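The dot-product/projection question at Lec 8 @40:00-@42:00, as a formula (standard linear algebra, not notation from the lecture):

\[
\mathbf{A}\cdot\mathbf{B} = \|\mathbf{A}\|\,\|\mathbf{B}\|\cos\theta, \qquad
\operatorname{proj}_{\mathbf{B}}\mathbf{A} = \frac{\mathbf{A}\cdot\mathbf{B}}{\mathbf{B}\cdot\mathbf{B}}\,\mathbf{B},
\]

so "how much of A is in the direction of B" is the scalar \(\mathbf{A}\cdot\mathbf{B}/\|\mathbf{B}\|\).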
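A minimal sketch of Rosenblatt's perceptron training rule discussed at Lec 8 @11:24, @33:00, and @46:00. This is the classic textbook algorithm, not necessarily Scott's Eq. 10.2 verbatim; the AND-gate data are my example:

```python
# Rosenblatt perceptron training: each mistake moves the weight vector w
# toward the misclassified input, until the hyperplane w.x + b = 0
# separates the classes (guaranteed if the data are linearly separable).
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """X: (n_samples, n_features); y: labels in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # wrong side of the hyperplane
                w += lr * yi * xi
                b += lr * yi
                mistakes += 1
        if mistakes == 0:                # converged: all points classified
            break
    return w, b

# AND gate: linearly separable, so training converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, +1])
w, b = train_perceptron(X, y)
print(w, b, np.sign(X @ w + b))          # predictions match y
```

XOR is not linearly separable, which is the Minsky-Papert objection noted at Lec 9 @7:00: on XOR this loop never converges.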